The Future of the CPU - by Phil97
                   ---------------------------------

"Intel  Octium:   200  Ghz  /  1024 bit CPU"..  Will we be seeing such a
device hyped in the press of the year 2005?  Or is the exponential graph
of CPU performance about to level off?  Consider...

Data Bus Width: 
---------------

We've gone from 4-bit to 8-, 16-, 32-, and now 64-bit. But how far can
we, or do we need to, go? In the 8-bit days the limits were set by the
materials and technology available, but now it's the laws of physics
that impose them. For a start, imagine the physical practicalities of
routing high-density data buses around inside a CPU. Transistors are
already only a few hundred atoms across, and electrons are weird
creatures that start to display unwanted quantum interference effects
at high densities and frequencies. There is also the problem of
actually etching the silicon to make the CPU. The minute tracks are
currently created by focusing the image of an ultraviolet-exposed
photographic mask onto the silicon wafer, but higher track/component
densities require ever shorter wavelengths of radiation to create the
image, and higher frequencies such as X-rays cannot be focussed
accurately. OK, you could make the CPU physically bigger, but that
would mean longer data paths, increasing the time it takes for data to
move around and also creating more resistance, requiring more power to
overcome and in turn generating more heat.

In any case, how wide do data buses need to be? With RISC technology
each instruction is only a few bytes, so other than filling instruction
pipelines from memory (which could be done with a dedicated circuit
anyway) there is not much point in lugging more than 8 consecutive
bytes around at once...
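As a rough sanity check on that claim, here is a minimal sketch,
assuming fixed 4-byte RISC instructions (as on MIPS, SPARC or ARM of
the era), of how many instructions each bus width actually delivers
per transfer:

```python
# Back-of-envelope: instructions delivered per bus transfer,
# assuming a fixed 4-byte RISC instruction word.

INSTRUCTION_BYTES = 4  # typical fixed-width RISC instruction

for bus_bits in (8, 16, 32, 64, 128):
    bus_bytes = bus_bits // 8
    per_fetch = bus_bytes / INSTRUCTION_BYTES
    print(f"{bus_bits:3d}-bit bus: {per_fetch:4.2f} instructions per transfer")
```

An 8-byte (64-bit) transfer already carries two whole instructions, so
widening the bus further mainly helps bulk moves, not ordinary code
fetch.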

Clock speed: 
------------

400 MHz Pentium IIs have already been rumoured. Does this mean we can
expect 1 GHz CPUs soon? And then on into the terahertz range? The
"speed of electricity" (not quite the speed of light) puts a final
barrier on clock speed, but there are other factors that set limits
long before this: the switching speed of transistors, for example. The
more power allotted, the faster they switch; unfortunately, more power
= more heat. At 400 MHz we are almost halfway to the gigahertz band
(the frequency range that cell phones and microwave ovens work in),
and at such frequencies electromagnetic radiation is readily generated
- this would cause chaos at high power levels inside a high-density
CPU.
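The "speed of electricity" barrier is easy to put numbers on. A minimal
sketch, assuming on-chip signals travel at roughly half the speed of
light (a commonly quoted rough figure, not a measured one), of how far
a signal can get in a single clock cycle:

```python
# How far can a signal travel in one clock cycle?
C = 3.0e8               # speed of light in vacuum, m/s
SIGNAL_SPEED = 0.5 * C  # assumed on-chip propagation speed (rough)

for clock_hz in (400e6, 1e9, 1e12):
    period_s = 1.0 / clock_hz
    distance_mm = SIGNAL_SPEED * period_s * 1000
    print(f"{clock_hz/1e9:8.3f} GHz -> {distance_mm:9.3f} mm per cycle")
```

At 1 GHz a signal covers about 150 mm per cycle - comfortable. At one
terahertz it covers about 0.15 mm, smaller than the chip itself, which
is why the terahertz range is a different world entirely.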

The short-term solution to the heat problem, first used in the Pentium,
was to drop the operating voltage (from 5 to 3 volts). You could go a
bit lower than this, but there is a limit: silicon transistors need
about 0.7 volts before they'll do anything, plus you need a safe noise
margin to distinguish a definite "0" voltage level from a "1" voltage
level.

Conclusion:
-----------

Superconducting  materials  may  one  day provide a solution to the heat
problems  but  this technology still seems a long way off.  Further into
the  future,  optical  CPUs  may  be  developed  allowing speed-of-light
computation and then that will truly be the end of the matter!

But back to today: the PC philosophy is all wrong. It still relies on a
single "workhorse" at the centre of the system. Parallel processing in
one form or another is the way forward. Even the Amiga had this of a
sort (Copper/blitter/DMA audio), and look at the games consoles: chock
full of custom hardware with a bog-standard CPU and yet still
outperforming top-range PCs in everything that matters...



-- 
              _______ __ __  _  _____  __  _______ ________
         .--- \_ __  \\ | /\|_|\\   /\/_/\/  ____/\\____ _/\--.
         |    / |_/  //   \ / \ /  / /\_\/\  \_| \/ \__/ |\/  |
         |   /   ___//  |  \   \   \/__    \___   \  /   _/\  |
         |  /___|\__/___|___\__/______/\    \__|___\-/___|\/  |
         `--\___\|  \___\____\_\______\/-------\____\\___\|---'


end